12 research outputs found

    Group composition of diesel fuel as a factor determining the effectiveness of depressant additives

    In this paper, we present a content-based image retrieval system designed to retrieve mammograms from a large medical image database. The system is built around breast density, following the four categories defined by the American College of Radiology, and is integrated with the database of the Image Retrieval in Medical Applications (IRMA) project, which provides images with classification ground truth. Two-dimensional principal component analysis (2D-PCA) is used to characterize breast density texture, representing texture effectively while allowing for dimensionality reduction. A support vector machine performs the retrieval. Average precision rates range from 83% to 97% on a data set of 5,024 images. The results indicate the potential of the system as the first stage of a computer-aided diagnosis framework.
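To make the feature-extraction step concrete, here is a minimal sketch of two-dimensional PCA, with a simple distance-based ranking standing in for the paper's SVM retrieval stage; the image sizes, random data, and choice of `k` are invented for the example.

```python
import numpy as np

def two_d_pca(images, k):
    """2D-PCA: project each (m, n) image onto the top-k eigenvectors of
    the n x n image covariance matrix, keeping row structure intact."""
    centered = images - images.mean(axis=0)
    # G = (1/N) * sum_i A_i^T A_i  -- the image covariance matrix.
    G = np.einsum('imn,imk->nk', centered, centered) / len(images)
    _, vecs = np.linalg.eigh(G)            # eigenvalues in ascending order
    V = vecs[:, ::-1][:, :k]               # top-k eigenvectors as columns
    return images @ V, V                   # features have shape (N, m, k)

rng = np.random.default_rng(0)
database = rng.normal(size=(50, 16, 16))   # toy stand-ins for mammogram patches
feats, V = two_d_pca(database, k=4)

# Rank database images by feature distance to a query (a stand-in for
# the SVM-based retrieval used in the paper).
query = (rng.normal(size=(16, 16)) @ V).ravel()
dists = np.linalg.norm(feats.reshape(50, -1) - query, axis=1)
ranking = np.argsort(dists)                # closest images first
```

Each image keeps an (m, k) feature matrix rather than a single flattened vector, which is what lets 2D-PCA reduce dimensionality without destroying row-wise texture structure.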

    Measuring self-regulation in everyday life: reliability and validity of smartphone-based experiments in alcohol use disorder

    Self-regulation, the ability to guide behavior according to one’s goals, plays an integral role in understanding loss of control over unwanted behaviors, for example in alcohol use disorder (AUD). Yet, experimental tasks that measure processes underlying self-regulation are not easy to deploy in contexts where such behaviors usually occur, namely outside the laboratory, and in clinical populations such as people with AUD. Moreover, lab-based tasks have been criticized for poor test–retest reliability and lack of construct validity. Smartphones can be used to deploy tasks in the field, but often require shorter versions of tasks, which may further decrease reliability. Here, we show that combining smartphone-based tasks with joint hierarchical modeling of longitudinal data can overcome at least some of these shortcomings. We test four short smartphone-based tasks outside the laboratory in a large sample (N = 488) of participants with AUD. Although task measures indeed have low reliability when data are analyzed traditionally by modeling each session separately, joint modeling of longitudinal data increases reliability to good and oftentimes excellent levels. We next test the measures’ construct validity and show that extracted latent factors are indeed in line with theoretical accounts of cognitive control and decision-making. Finally, we demonstrate that a resulting cognitive control factor relates to a real-life measure of drinking behavior and yields stronger correlations than single measures based on traditional analyses. Our findings demonstrate how short, smartphone-based task measures, when analyzed with joint hierarchical modeling and latent factor analysis, can overcome frequently reported shortcomings of experimental tasks.
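To give a feel for why pooling sessions helps, here is a toy simulation; all numbers are invented, and a simple empirical-Bayes shrinkage estimator stands in for the paper's full joint hierarchical model. Pooling a subject's sessions and shrinking toward the group mean recovers the latent score more accurately than taking any single session at face value.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_sess = 200, 4
tau, sigma = 1.0, 1.0          # between-subject sd, within-session noise sd

# Latent self-regulation score per subject, plus noisy session scores.
theta = rng.normal(0, tau, n_subj)
scores = theta[:, None] + rng.normal(0, sigma, (n_subj, n_sess))

# "Traditional" analysis: take a single session at face value.
single = scores[:, 0]

# Joint (empirical-Bayes) analysis: pool all sessions per subject and
# shrink toward the group mean by the signal-to-total-variance ratio.
pooled = scores.mean(axis=1)
shrink = tau**2 / (tau**2 + sigma**2 / n_sess)
joint = shrink * pooled + (1 - shrink) * scores.mean()

corr_single = np.corrcoef(single, theta)[0, 1]
corr_joint = np.corrcoef(joint, theta)[0, 1]
```

With these settings `corr_joint` sits well above `corr_single`, mirroring (in a very simplified way) the reliability gains the paper obtains from full joint hierarchical modeling.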

    Real-Space Mesh Techniques in Density Functional Theory

    This review discusses progress in efficient solvers which have as their foundation a representation in real space, either through finite-difference or finite-element formulations. The relationship of real-space approaches to linear-scaling electrostatics and electronic structure methods is first discussed. Then the basic aspects of real-space representations are presented. Multigrid techniques for solving the discretized problems are covered; these numerical schemes allow for highly efficient solution of the grid-based equations. Applications to problems in electrostatics are discussed, in particular numerical solutions of the Poisson and Poisson-Boltzmann equations. Next, methods for solving self-consistent eigenvalue problems in real space are presented; these techniques have been extensively applied to solutions of the Hartree-Fock and Kohn-Sham equations of electronic structure, and to eigenvalue problems arising in semiconductor and polymer physics. Finally, real-space methods have found recent application in computations of optical response and excited states in time-dependent density functional theory, and these computational developments are summarized. Multiscale solvers are competitive with the most efficient available plane-wave techniques in terms of the number of self-consistency steps required to reach the ground state, and they require less work in each self-consistency update on a uniform grid. Besides excellent efficiencies, the decided advantages of the real-space multiscale approach are 1) the near-locality of each function update, 2) the ability to handle global eigenfunction constraints and potential updates on coarse levels, and 3) the ability to incorporate adaptive local mesh refinements without loss of optimal multigrid efficiencies. Comment: 70 pages, 11 figures; to be published in Reviews of Modern Physics.
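As a concrete taste of the finite-difference machinery this review covers, the sketch below solves a 1D Poisson model problem with plain Jacobi relaxation, the kind of smoother a multigrid cycle applies at each level. The grid size and sweep count are arbitrary choices for the example; a real multigrid solver would add coarse-grid corrections and converge in far fewer sweeps.

```python
import numpy as np

def jacobi_poisson_1d(f, n_sweeps=20_000):
    """Solve -u'' = f on (0, 1) with u(0) = u(1) = 0, using second-order
    finite differences and Jacobi relaxation on a uniform grid."""
    n = f.size                     # number of interior grid points
    h = 1.0 / (n + 1)              # mesh spacing
    u = np.zeros(n + 2)            # includes the zero boundary values
    for _ in range(n_sweeps):
        # Jacobi update: u_i <- (u_{i-1} + u_{i+1} + h^2 f_i) / 2,
        # evaluated simultaneously from the previous iterate.
        u[1:-1] = 0.5 * (u[:-2] + u[2:] + h**2 * f)
    return u[1:-1]

# Model problem with known solution u(x) = sin(pi x).
n = 64
x = np.linspace(0, 1, n + 2)[1:-1]
f = np.pi**2 * np.sin(np.pi * x)
u = jacobi_poisson_1d(f)
err = np.abs(u - np.sin(np.pi * x)).max()   # limited by O(h^2) discretization
```

The slow decay of smooth error modes under Jacobi iteration is exactly what motivates the multigrid idea: restrict the residual to a coarser grid, where those modes become high-frequency and cheap to damp.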

    Machine learning for automatic construction of pediatric abdominal phantoms for radiation dose reconstruction

    The advent of Machine Learning (ML) is proving extremely beneficial in many healthcare applications. In pediatric oncology, retrospective studies that investigate the relationship between treatment and late adverse effects still rely on simple heuristics. To capture the effects of radiation treatment, treatment plans are typically simulated on virtual surrogates of patient anatomy called phantoms. Currently, phantoms are built to represent categories of patients based on reasonable yet simple criteria. This often results in phantoms that are too generic to accurately represent individual anatomies. We present a novel approach that combines imaging data and ML to build individualized phantoms automatically. We design a pipeline that, given features of patients treated in the pre-3D planning era when only 2D radiographs were available, as well as a database of 3D Computed Tomography (CT) imaging with organ segmentations, uses ML to predict how to assemble a patient-specific phantom. Using 60 abdominal CTs of pediatric patients between 2 and 6 years of age, we find that our approach delivers significantly more representative phantoms than current phantom-building criteria, in terms of the shape and location of the two considered organs (liver and spleen) and the shape of the abdomen. Furthermore, as interpretability is often central to trusting ML models in medical contexts, we consider, among other ML algorithms, the Gene-pool Optimal Mixing Evolutionary Algorithm for Genetic Programming (GP-GOMEA), which learns readable mathematical expression models. We find that the readability of its output does not compromise prediction performance, as GP-GOMEA delivered the best-performing models.
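The appeal of a readable model can be illustrated without the GP-GOMEA machinery itself. The sketch below fits a plain least-squares expression on synthetic data (the feature names, coefficients, sample size, and noise level are all invented for illustration): the result is a short formula a clinician can inspect directly, which is the property highlighted above.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical 2D-radiograph features: abdominal width and height (cm).
width = rng.uniform(14, 22, 60)
height = rng.uniform(20, 30, 60)
# Synthetic target with a known generating rule plus noise.
liver_vol = 18.0 * width + 6.0 * height + rng.normal(0, 10, 60)

# Ordinary least squares: recover a readable linear expression.
X = np.column_stack([width, height, np.ones_like(width)])
coef, *_ = np.linalg.lstsq(X, liver_vol, rcond=None)
expr = f"liver_vol = {coef[0]:.1f}*width + {coef[1]:.1f}*height + {coef[2]:.1f}"
```

GP-GOMEA searches a much richer space of mathematical expressions than this linear form, but the payoff is the same: the fitted model is a formula that can be read and sanity-checked, not a black box.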

    Estimating the reproducibility of psychological science

    One of the central goals in any scientific endeavor is to understand causality. Experiments that seek to demonstrate a cause/effect relation most often manipulate the postulated causal factor. Aarts et al. describe the replication of 100 experiments reported in papers published in 2008 in three high-ranking psychology journals. Assessing whether the replication and the original experiment yielded the same result according to several criteria, they find that about one-third to one-half of the original findings were also observed in the replication study.

    Data from: Estimating the reproducibility of psychological science

    This record contains the underlying research data for the publication "Estimating the reproducibility of psychological science"; the full text is available from: https://ink.library.smu.edu.sg/lkcsb_research/5257. Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. Replication effects were half the magnitude of original effects, representing a substantial decline. Ninety-seven percent of original studies had statistically significant results. Thirty-six percent of replications had statistically significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.
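One of the replication criteria mentioned above, whether the original effect size falls inside the 95% confidence interval of the replication effect, is straightforward to compute for correlation effect sizes via the Fisher z transform. A minimal sketch, with sample values invented for illustration:

```python
import math

def original_in_replication_ci(r_orig, r_rep, n_rep, z_crit=1.96):
    """Check whether the original correlation lies inside the 95% CI of
    the replication correlation, working on the Fisher-z scale where the
    sampling distribution is approximately normal with se = 1/sqrt(n-3)."""
    z_rep = math.atanh(r_rep)
    se = 1.0 / math.sqrt(n_rep - 3)
    lo, hi = z_rep - z_crit * se, z_rep + z_crit * se
    return lo <= math.atanh(r_orig) <= hi

# e.g. original r = .40, replication r = .35 with n = 100 participants
original_in_replication_ci(0.40, 0.35, 100)
```

Note that this criterion rewards imprecise replications: a small replication sample widens the interval, making "consistency" easier to achieve, which is one reason the paper reports several complementary criteria.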